Exploiting spatial and temporal coherence in GPU-based volume rendering
Efficiency is a key aspect in volume rendering, even if powerful
graphics hardware is employed, since increasing data set sizes and
growing demands on visualization techniques outweigh improvements in
graphics processor performance. This dissertation examines how spatial
and temporal coherence in volume data can be used to optimize volume
rendering. Several new approaches for static as well as for time-varying
data sets are introduced, which exploit different types of coherence in
different stages of the volume rendering pipeline. The presented
acceleration algorithms include empty space skipping using occlusion
frustums, a slab-based cache structure for raycasting, and a lossless
compression scheme for time-varying data. The algorithms were designed
for use with GPU-based volume raycasting and to efficiently exploit the
features of modern graphics processors, especially stream processing.
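The empty-space-skipping idea can be illustrated with a deliberately simple min/max-block variant: precompute a coarse per-block maximum of the densities, then let each ray jump over blocks whose maximum falls below the visibility threshold. This is a hypothetical sketch for illustration only, not the occlusion-frustum technique the dissertation actually proposes; the block size, function names, and marching scheme are arbitrary choices.

```python
# Toy block-based empty space skipping for volume raycasting.
# All names and parameters are illustrative, not from the thesis.
import numpy as np

def build_max_grid(volume, block=4):
    """Precompute the per-block maximum density used to detect empty space."""
    nx, ny, nz = volume.shape
    gx, gy, gz = nx // block, ny // block, nz // block
    return volume[:gx * block, :gy * block, :gz * block].reshape(
        gx, block, gy, block, gz, block).max(axis=(1, 3, 5))

def raycast(volume, max_grid, origin, direction, step=0.5, threshold=0.1, block=4):
    """March a ray front to back, skipping blocks whose max density is
    at or below the threshold (they cannot contribute to the image)."""
    pos = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    accum = 0.0
    t = 0.0
    t_max = np.linalg.norm(volume.shape)  # conservative exit distance
    while t < t_max:
        p = pos + t * d
        idx = (p // block).astype(int)
        if np.any(idx < 0) or np.any(idx >= max_grid.shape):
            break  # left the volume
        if max_grid[tuple(idx)] <= threshold:
            t += block  # empty block: advance by a whole block at once
            continue
        v = volume[tuple(np.clip(p.astype(int), 0, np.array(volume.shape) - 1))]
        if v > threshold:
            accum += v * step  # crude emission-only compositing
        t += step
    return accum
```

A ray passing only through empty blocks touches just one coarse-grid cell per block instead of one sample per step, which is where the speedup comes from.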
Unwind: Interactive Fish Straightening
The ScanAllFish project is a large-scale effort to scan all the world's
33,100 known species of fishes. It has already generated thousands of
volumetric CT scans of fish species which are available on open access
platforms such as the Open Science Framework. To achieve a scanning rate
required for a project of this magnitude, many specimens are grouped together
into a single tube and scanned all at once. The resulting data contain many
fish which are often bent and twisted to fit into the scanner. Our system,
Unwind, is a novel interactive visualization and processing tool which
extracts, unbends, and untwists volumetric images of fish with minimal user
interaction. Our approach enables scientists to interactively unwarp these
volumes to remove the undesired torque and bending using a piecewise-linear
skeleton extracted by averaging isosurfaces of a harmonic function connecting
the head and tail of each fish. The result is a volumetric dataset of an
individual, straight fish in a canonical pose defined by the expert
marine-biologist user. We have developed Unwind in collaboration with a team of marine
biologists: Our system has been deployed in their labs, and is presently being
used for dataset construction, biomechanical analysis, and the generation of
figures for scientific publication.
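The skeleton-extraction idea described above can be sketched in two dimensions: solve for a harmonic function that is 0 at the head and 1 at the tail, then average the coordinates of each iso-level band to obtain a piecewise-linear skeleton. This is a minimal 2D toy under assumed parameters (grid size, Jacobi iteration count, number of bins), not Unwind's 3D implementation, which averages isosurfaces of the segmented fish volume.

```python
# 2D toy of the harmonic-function skeleton idea; in Unwind the same
# principle is applied to 3D isosurfaces. Parameters are arbitrary.
import numpy as np

def harmonic_skeleton(shape, head, tail, iters=5000, bins=8):
    """Solve a discrete Laplace equation with u=0 at `head`, u=1 at `tail`
    and zero-flux boundaries, then average the coordinates of each
    iso-level bin to get skeleton points ordered from head to tail."""
    u = np.full(shape, 0.5)
    for _ in range(iters):
        u[head] = 0.0  # Dirichlet constraints re-imposed each iteration
        u[tail] = 1.0
        p = np.pad(u, 1, mode='edge')  # edge padding ~ zero-flux boundary
        # Jacobi relaxation: each cell becomes the mean of its 4 neighbors
        u = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
    u[head] = 0.0
    u[tail] = 1.0
    ys, xs = np.indices(shape)
    skeleton = []
    for k in range(bins):
        sel = (u >= k / bins) & (u <= (k + 1) / bins)
        if sel.any():  # centroid of this iso-level band
            skeleton.append((ys[sel].mean(), xs[sel].mean()))
    return skeleton
```

On a straight domain the resulting points lie on the centerline; on a bent fish the same averaging traces the bend, which is what makes the skeleton usable for unwarping.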
REAL-TIME RENDERING OF WEATHER-RELATED PHENOMENA IN DIGITAL 3D URBAN MODELS
Abstract: General interest in visualizations of digital 3D city models is growing rapidly, and several applications are already available that display such models very realistically. Many authors have emphasized the importance of realistic illumination effects for computer-generated images, and this applies especially in the context of 3D city visualization. However, current 3D city visualization applications rarely implement techniques for achieving realistic illumination, in particular the effects caused by current weather phenomena. At most, some geospatial visualization systems render artificial skies, sometimes with a georeferenced determination of the sun position, to give the user the impression of a real sky. Such artificial renderings, however, are not sufficient for real simulation purposes. In this paper we present techniques to augment visualizations of digital 3D city models with real-time display of georeferenced meteorological phenomena. For this purpose we retrieve weather information from different sources, i.e., real-time images from cameras and radar data from web-based weather services, and we use this information in the rendering process for realistic visualization of different weather-related phenomena, such as clouds, rain, and fog. Our approach is not limited to a specific setup, and we have evaluated the results in a user study presented in this paper.
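The georeferenced sun-position step mentioned in the abstract can be approximated with standard textbook formulas; the sketch below uses the common cosine declination approximation and is an assumption for illustration, not the paper's actual implementation.

```python
# Approximate solar elevation from latitude, day of year, and local solar
# time, using the standard cosine declination approximation (an assumed
# stand-in for whatever sun-position model the paper employs).
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Return the approximate solar elevation angle in degrees."""
    # Declination: roughly -23.44 deg at the winter solstice (day ~355)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # Earth rotates 15 deg per hour
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)))
```

At the equator around the March equinox the sun stands nearly overhead at solar noon and well below the horizon at midnight, which a quick sanity check confirms.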